
fix(openai-agents): apply content tracing flag to content #3487

Merged

galkleinman merged 2 commits into traceloop:main from duanyutong:fix-openai-agents-content on Jan 29, 2026

Conversation

@duanyutong duanyutong (Contributor) commented Dec 2, 2025

Changes

  • should_send_prompts is defined in the openai-agents instrumentation but was unused; use it to gate content logging, mirroring the openai package
  • make the should_send_prompts implementations in the two packages consistent (they previously behaved differently)
  • apply _is_truthy to both the env var and the trace-context override variable for consistency and maximum compatibility
  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.

Important

Enable content tracing in openai-agents by using should_send_prompts() and ensure consistency with the openai package.

  • Behavior:
    • Use should_send_prompts() in _hooks.py to control content logging for OpenAI Agents.
    • Apply _is_truthy to both environment variable and trace context override in should_send_prompts() for consistency.
  • Consistency:
    • Align should_send_prompts() behavior in openai_agents/utils.py and openai/utils.py (see the sketch below).
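
For reference, a minimal sketch of the resulting helper, pieced together from the review excerpts later in this thread (the context-override key name is an assumption of this sketch; each package keeps its own independent copy of this code):

import os

from opentelemetry import context as context_api

_TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"


def _is_truthy(value) -> bool:
    # Normalize env-var strings, booleans, and None to a boolean.
    return str(value).strip().lower() in ("true", "1", "yes", "on")


def should_send_prompts() -> bool:
    """Whether prompt/response content may be recorded on spans."""
    env_setting = os.getenv(_TRACELOOP_TRACE_CONTENT, "true")  # default: on
    # The override key name below is assumed for illustration.
    override = context_api.get_value("override_enable_content_tracing")
    return _is_truthy(env_setting) or _is_truthy(override)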

This description was created by Ellipsis for 77fa6f4.

Summary by CodeRabbit

  • New Features

    • Configurable content tracing added: users can now enable/disable inclusion of prompts, responses, tool arguments, and realtime outputs in traces.
  • Behavior Changes

    • Tracing now respects the content-tracing setting to prevent accidental capture of prompt/response content while preserving public API compatibility.
  • Documentation

    • CONTRIBUTING.md updated with detailed local testing, linting, and per-package development instructions.


@CLAassistant CLAassistant commented Dec 2, 2025

CLA assistant check
All committers have signed the CLA.

@coderabbitai coderabbitai Bot commented Dec 2, 2025


📝 Walkthrough

Adds explicit content-tracing gating: a new truthiness helper and a trace-content constant; should_send_prompts() now returns bool; instrumentation hooks check the content-tracing flag and conditionally emit prompt, response, tool, and realtime span content.

Changes

  • Tracing utils: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py, packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
    Added the _is_truthy() helper; introduced the _TRACELOOP_TRACE_CONTENT constant; updated should_send_prompts() -> bool with a docstring and normalized truthiness checks (env var + override).
  • Instrumentation hooks: packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
    Imported should_send_prompts(); added trace_content gating to span emission paths. Prompt content, tool call arguments, response content, and realtime (speech/transcription/group) emissions are now conditional on trace_content.
  • Documentation: CONTRIBUTING.md
    Added a "Local Testing and Linting in this Repo" section with npm/Python per-package setup and nx-based commands for local development and linting.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I nibble at flags and truths so bright,
content sleeps unless the trace gives light.
Helpers hum softly, gates click in line,
prompts and echoes only pass when fine. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title 'fix(openai-agents): apply content tracing flag to content' is directly related to the main changes in the PR, which apply the should_send_prompts() flag to control content logging across multiple emission paths in the openai-agents instrumentation.




@ellipsis-dev ellipsis-dev Bot (Contributor) left a comment


Important

Looks good to me! 👍

Reviewed everything up to 77fa6f4 in 1 minute and 43 seconds.
  • Reviewed 134 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 8 draft comments. View those below.
1. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:298
  • Draft comment:
    Guarding content logging by checking should_send_prompts() ensures that sensitive prompt data is only recorded when permitted, aligning this behavior with the OpenAI instrumentation.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that the current implementation aligns with OpenAI instrumentation, which is not necessary for the PR author to know.
2. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:399
  • Draft comment:
    The added check with should_send_prompts() before processing 'output.content' prevents logging of content when tracing is disabled, maintaining consistency with other parts of the code.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it only praises the addition of a check without providing any actionable feedback or suggestions for improvement. It does not align with the guidelines for useful comments.
3. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:430
  • Draft comment:
    Adding the should_send_prompts() condition on the branch handling direct text output safeguards against unintended logging when content tracing is disabled.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that a change is good, which violates the rule against making purely informative comments.
4. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:466
  • Draft comment:
    Guarding the legacy prompt extraction with should_send_prompts() ensures that prompt data from input is only logged when allowed, enhancing consistency across the tracing logic.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it praises the use of should_send_prompts() without suggesting any changes or improvements. It doesn't ask for confirmation or suggest any specific action.
5. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:508
  • Draft comment:
    Applying the should_send_prompts() check here before processing output content ensures that sensitive response content is only recorded when appropriate.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply acknowledges a good practice without offering any specific advice or raising any concerns.
6. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:539
  • Draft comment:
    The should_send_prompts() condition now also guards the branch for direct text outputs, promoting consistent control over sensitive data logging.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it praises a change without providing any actionable feedback or suggestions. It does not align with the rules for useful comments.
7. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py:23
  • Draft comment:
    Refactoring the should_send_prompts() function with explicit type hints and applying _is_truthy uniformly improves clarity and consistency. This change ensures broader support for various truthy values.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, praising the refactoring without providing any actionable feedback or suggestions. It doesn't ask for confirmation or suggest improvements, so it doesn't align with the rules for useful comments.
8. packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:181
  • Draft comment:
    Aligning the should_send_prompts() implementation to use _is_truthy for both the environment variable and the context override brings consistency with the openai-agents instrumentation.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, explaining the reason behind a change without suggesting any action or asking for confirmation. It doesn't align with the rules for useful comments.

Workflow ID: wflow_nroeXO8JV5fdsBQi



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

399-443: Consider refactoring duplicated response processing logic.

There is significant code duplication between lines 399-443 and 508-552. Both blocks handle response output processing (extracting content from ResponseOutputMessage, tool calls, and direct text content) with nearly identical logic. Consider extracting this into a helper method to improve maintainability.

Note: This is pre-existing duplication, not introduced by this PR.

Example refactoring approach:

def _process_response_output(self, otel_span, response, prefix_attr):
    """Extract and set response output attributes."""
    if not (hasattr(response, 'output') and response.output):
        return
        
    for i, output in enumerate(response.output):
        if should_send_prompts() and hasattr(output, 'content') and output.content:
            # Text message with content array (ResponseOutputMessage)
            content_text = "".join(
                content_item.text for content_item in output.content 
                if hasattr(content_item, 'text')
            )
            if content_text:
                otel_span.set_attribute(f"{prefix_attr}.{i}.content", content_text)
                otel_span.set_attribute(f"{prefix_attr}.{i}.role", 
                    getattr(output, 'role', 'assistant'))
        # ... rest of logic

Also applies to: 508-552

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between c8c1553 and 77fa6f4.

📒 Files selected for processing (3)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (7 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (3)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • dont_throw (52-78)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • should_send_prompts (181-188)
  • _is_truthy (177-178)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (14)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • _is_truthy (19-20)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/utils.py (1)
  • should_send_prompts (49-52)
packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/utils.py (1)
  • should_send_prompts (32-35)
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py (1)
  • should_send_prompts (28-31)
packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py (1)
  • should_send_prompts (25-28)
packages/opentelemetry-instrumentation-cohere/opentelemetry/instrumentation/cohere/utils.py (1)
  • should_send_prompts (38-41)
packages/opentelemetry-instrumentation-vertexai/opentelemetry/instrumentation/vertexai/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/utils.py (1)
  • should_send_prompts (36-39)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/utils.py (1)
  • should_send_prompts (14-17)
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-replicate/opentelemetry/instrumentation/replicate/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-haystack/opentelemetry/instrumentation/haystack/utils.py (1)
  • should_send_prompts (21-24)
🔇 Additional comments (6)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

18-18: LGTM: Private constant follows naming conventions.

The constant is appropriately prefixed with _ to indicate internal use.


181-188: Verify consistency with other instrumentation packages before merging.

This PR updates should_send_prompts() in the openai and openai-agents packages to use _is_truthy() for both environment variable and override checks. Before merging, confirm:

  1. Whether other instrumentation packages (langchain, llamaindex, anthropic, groq, cohere, vertexai, bedrock, watsonx, ollama, mistralai, replicate, transformers, haystack) use a different pattern for the same function
  2. If an inconsistency exists, decide whether to:
    • Update all packages in this PR
    • Create a follow-up issue to standardize across all packages
    • Document why only these two packages are being updated
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)

9-9: LGTM: Consistent with openai package.

The private constant naming is consistent with the openai package implementation.


23-30: LGTM: Implementation consistent with openai package.

The function signature, docstring, and implementation are now consistent with the openai package. The use of _is_truthy() for both environment variable and context override provides uniform truthiness evaluation.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)

14-14: LGTM: Import enables content tracing control.

The import of should_send_prompts enables the gating mechanism for content tracing as intended by this PR.


301-301: LGTM: Content tracing guards correctly implemented.

The should_send_prompts() guards are correctly placed to control content and prompt attribute emission. The implementation correctly checks the flag before setting content-related attributes across all code paths.

Also applies to: 402-402, 433-433, 469-469, 511-511, 542-542

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 77fa6f4 to d034a5c on December 3, 2025, 17:23

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

301-305: Content emission is correctly gated; consider caching should_send_prompts() per span.

The new checks around prompt and completion content ensure that only when should_send_prompts() evaluates truthy do you emit potentially sensitive text, while still recording non-content metadata (roles, usage, etc.) unconditionally. This aligns with the stated PR goal of applying the content tracing flag across Agents paths.

For a small perf/clarity win, you could compute the flag once in on_span_end and reuse it instead of calling should_send_prompts() multiple times inside loops:

@@
-        if span in self._otel_spans:
-            otel_span = self._otel_spans[span]
-            span_data = getattr(span, 'span_data', None)
-            if span_data and (
+        if span in self._otel_spans:
+            otel_span = self._otel_spans[span]
+            span_data = getattr(span, 'span_data', None)
+            send_prompts = should_send_prompts()
+            if span_data and (
@@
-                        # Set content attribute
-                        if should_send_prompts() and content is not None:
+                        # Set content attribute
+                        if send_prompts and content is not None:
@@
-                            # Handle different output types
-                            if should_send_prompts() and hasattr(output, 'content') and output.content:
+                            # Handle different output types
+                            if send_prompts and hasattr(output, 'content') and output.content:
@@
-                            elif should_send_prompts() and hasattr(output, 'text'):
+                            elif send_prompts and hasattr(output, 'text'):
@@
-                input_data = getattr(span_data, 'input', [])
-                if should_send_prompts() and input_data:
+                input_data = getattr(span_data, 'input', [])
+                if send_prompts and input_data:
@@
-                            # Handle different output types
-                            if should_send_prompts() and hasattr(output, 'content') and output.content:
+                            # Handle different output types
+                            if send_prompts and hasattr(output, 'content') and output.content:
@@
-                            elif should_send_prompts() and hasattr(output, 'text'):
+                            elif send_prompts and hasattr(output, 'text'):

This keeps behavior identical while avoiding repeated env/context lookups inside the hot loops.

Also applies to: 402-415, 433-439, 469-484, 511-524, 542-548

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 77fa6f4 and d034a5c.

📒 Files selected for processing (4)
  • README.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (7 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
🧠 Learnings (1)
📚 Learning: 2025-12-02T21:09:48.690Z
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.

Applied to files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)
  • should_send_prompts (181-188)
🔇 Additional comments (3)
README.md (1)

183-192: Verify no duplicate test setup instructions elsewhere in the README.

The summary indicates this pre-test setup instruction block is inserted in two locations within the Contributing section, duplicating the guidance. However, only one instance is visible in the provided code at lines 183-192. This discrepancy should be verified to ensure no unintended duplication exists elsewhere in the file.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)

177-188: Content-tracing flag logic looks solid; confirm intended OR semantics and default.

_is_truthy() gives robust normalization for both the env var and the trace-context override, and should_send_prompts() now behaves consistently with the agents-side helper. One subtle point: with env_setting defaulting to "true" and using OR, the context override can enable tracing when the env is falsy but cannot disable it when the env (or default) is truthy. Please confirm this precedence and default-on behavior is intentional for your privacy/config story.
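
Concretely, assuming the _is_truthy helper sketched earlier in this thread, the OR combination behaves like this (illustrative only):

# Env unset (defaults to "true"): traced regardless of the override.
assert _is_truthy("true") or _is_truthy(None)
# Env explicitly "false": a truthy override can re-enable tracing...
assert _is_truthy("false") or _is_truthy(True)
# ...but a falsy override cannot disable it while the env is truthy.
assert _is_truthy("true") or _is_truthy(False)
# Tracing is off only when both sources are falsy.
assert not (_is_truthy("false") or _is_truthy(None))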

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

14-14: Importing should_send_prompts into hooks is appropriate.

Pulling in should_send_prompts from .utils cleanly wires the hooks into the shared content-tracing decision without expanding this module’s responsibilities.

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 2 times, most recently from 32e4fc9 to 8f64e4e on December 3, 2025, 18:52

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)

19-20: Code duplication is acceptable here.

The _is_truthy function is duplicated from the openai package, which is acceptable since these packages must remain independent. Based on learnings, no shared utility extraction is needed.

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)

18-18: Inconsistent constant naming between packages.

This file uses TRACELOOP_TRACE_CONTENT (public), while the openai_agents package uses _TRACELOOP_TRACE_CONTENT (private, with underscore prefix). The PR description mentions renaming this constant for consistency, but it appears the rename wasn't applied here.

-TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
+_TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"

Then update the reference on line 186 accordingly.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 32e4fc9 and 8f64e4e.

📒 Files selected for processing (4)
  • README.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (9 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • README.md
🧬 Code graph analysis (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • dont_throw (52-78)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • dont_throw (132-160)
  • should_send_prompts (181-188)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • should_send_prompts (181-188)
  • _is_truthy (177-178)
🔇 Additional comments (12)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

186-188: Potential logic issue: override cannot disable content tracing.

The or operator means content will be traced if either condition is truthy. Since env_setting defaults to "true", setting the override to False won't disable tracing. If the intent is that the override should be able to disable tracing when explicitly set to false, consider:

-    return _is_truthy(env_setting) or _is_truthy(override)
+    if override is not None:
+        return _is_truthy(override)
+    return _is_truthy(env_setting)

If the current behavior (override can only enable, never disable) is intentional, please add a clarifying comment.


177-178: LGTM!

The _is_truthy helper correctly normalizes various input types (None, bool, string) for truthiness checks.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)

9-10: LGTM!

The constant is appropriately marked as private with the underscore prefix.


28-30: Same logic concern: override cannot disable content tracing.

Same issue as in the openai package—the or operator means the override can only enable tracing, not disable it when the environment variable defaults to "true". If both packages should behave identically (which they do now), consider whether this is the intended behavior.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (8)

236-236: LGTM!

Good pattern: calling should_send_prompts() once and storing in a Final variable ensures consistent behavior throughout the span lifecycle and avoids repeated function calls.
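
The pattern being praised, as a simplified sketch (the class and attribute names here are illustrative, not the hook's actual structure):

from typing import Final

from opentelemetry.instrumentation.openai_agents.utils import should_send_prompts


class AgentSpanHooks:
    def on_span_end(self, otel_span, prompt_content) -> None:
        # Evaluate the flag once per span so every emission path below
        # sees the same decision and env/context lookups are not repeated.
        should_trace_content: Final[bool] = should_send_prompts()
        if should_trace_content and prompt_content is not None:
            otel_span.set_attribute("gen_ai.prompt.0.content", prompt_content)
        # Non-content metadata (roles, usage, etc.) stays unconditional.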


345-349: Consider gating tool call arguments as content.

Tool call arguments may contain sensitive user data passed to functions. These are emitted without the should_trace_content guard, unlike prompt and response content. If tool arguments should be treated as content for privacy purposes, they should also be gated:

-                                if tool_call.get('arguments'):
+                                if should_trace_content and tool_call.get('arguments'):
                                     args = tool_call['arguments']
                                     if not isinstance(args, str):
                                         args = json.dumps(args)
                                     otel_span.set_attribute(f"{prefix}.tool_calls.{j}.arguments", args)

The same consideration applies to lines 420-432 and 528-541 where tool call attributes are set.


3-3: LGTM!

The Final import supports the type annotation on line 236, correctly documenting that should_trace_content won't be reassigned.


14-14: LGTM!

Import of should_send_prompts from the local utils module enables the content tracing gate functionality.


302-305: Content gating correctly applied to prompt content.

The guard prevents emission of potentially sensitive prompt content when content tracing is disabled.


403-415: Content gating correctly applied to response outputs.

Both structured content (line 403) and direct text output (line 434) are properly guarded.

Also applies to: 434-439


470-484: Content gating correctly applied to legacy input path.

The legacy fallback path for input content is properly gated with should_trace_content.


512-524: Content gating correctly applied to legacy response path.

Both structured content (line 512) and direct text output (line 543) in the legacy path are properly guarded, maintaining consistency with the primary path.

Also applies to: 543-548

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 4 times, most recently from f4bca6b to 2e9ba35 on December 8, 2025, 14:43

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)

181-188: Consider aligning other instrumentation packages with this improved pattern.

Multiple other instrumentation packages still use the old pattern without _is_truthy() normalization:

  • langchain, llamaindex, anthropic, groq, cohere, vertexai, bedrock, watsonx, replicate, ollama, mistralai, transformers, haystack

The new implementation provides better handling of edge cases (e.g., override set to string "false"). Consider updating these packages for consistency and improved robustness in a future PR.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 8f64e4e and 2e9ba35.

📒 Files selected for processing (5)
  • .gitignore (1 hunks)
  • CONTRIBUTING.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (9 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
🧬 Code graph analysis (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • should_send_prompts (181-188)
  • _is_truthy (177-178)
🪛 LanguageTool
CONTRIBUTING.md

[grammar] ~11-~11: The word “setup” is a noun. The verb is spelled with a white space.
Context: ...ocally. Run the following at repo root to setup the yarn dependencies. ```shell npm ...

(NOUN_VERB_CONFUSION)


[grammar] ~21-~21: The word “setup” is a noun. The verb is spelled with a white space.
Context: ... root. ```shell npx nx run opentelemetry-instrumentation-openai:install --with dev,t...

(NOUN_VERB_CONFUSION)

🔇 Additional comments (5)
.gitignore (1)

36-36: LGTM!

Adding .tool-versions to .gitignore is appropriate for version management tools like asdf.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

177-178: LGTM!

The _is_truthy() helper provides robust normalization of string-based configuration values, handling common truthy representations ("true", "1", "yes", "on").
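
For instance, with the helper as sketched earlier in this thread:

assert _is_truthy("TRUE") and _is_truthy("1") and _is_truthy("yes") and _is_truthy("on")
assert not _is_truthy("false") and not _is_truthy("") and not _is_truthy(None)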


181-188: Verify the empty string behavior change is intentional.

The new implementation changes behavior when TRACELOOP_TRACE_CONTENT is explicitly set to an empty string:

  • Old behavior: (os.getenv(TRACELOOP_TRACE_CONTENT) or "true").lower() == "true" would default to "true" for empty strings (since empty strings are falsy in Python)
  • New behavior: os.getenv(TRACELOOP_TRACE_CONTENT, "true") returns "" when explicitly set to empty, then _is_truthy("") evaluates to False

This means explicitly setting TRACELOOP_TRACE_CONTENT="" will now disable content tracing, whereas it previously enabled it. Confirm this behavior is tested and aligns with the intended configuration semantics.
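
Side by side, for the set-but-empty case described above (the old expression is quoted from this comment; _is_truthy as sketched earlier):

# Old: "" is falsy in Python, so the "true" fallback kicked in -> traced.
assert ("" or "true").lower() == "true"
# New: os.getenv(TRACELOOP_TRACE_CONTENT, "true") returns "" when the
# variable is set but empty (the default applies only when unset), and
# "" is not in the truthy set -> not traced.
assert not _is_truthy("")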

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)

9-9: LGTM!

Introducing the module-level constant _TRACELOOP_TRACE_CONTENT improves maintainability and makes the environment variable name easier to update consistently.


23-30: LGTM!

The implementation now matches the openai package, providing consistent behavior across both packages. The explicit return type annotation and docstring improve code clarity.

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 2e9ba35 to 5f39b86 on December 8, 2025, 16:41

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
CONTRIBUTING.md (1)

11-11: Fix verb usage: "setup" → "set up".

The word "setup" is a noun; the verb form requires a space.

-Run the following at repo root to setup the yarn dependencies.
+Run the following at repo root to set up the yarn dependencies.
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)

177-188: Implementation looks correct.

The _is_truthy helper robustly handles various truthy string representations and correctly returns False for None (since str(None) = "None"). The should_send_prompts logic correctly applies truthiness normalization to both the environment variable and the context override.

One minor inconsistency: this file uses TRACELOOP_TRACE_CONTENT (public constant) while the openai-agents package uses _TRACELOOP_TRACE_CONTENT (private with underscore). Consider aligning the naming convention if these are intended to be internal constants.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 2e9ba35 and 5f39b86.

📒 Files selected for processing (5)
  • .gitignore (1 hunks)
  • CONTRIBUTING.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (9 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • .gitignore
🧠 Learnings (1)
📚 Learning: 2025-08-21T04:01:31.783Z
Learnt from: prane-eth
Repo: traceloop/openllmetry PR: 3330
File: packages/opentelemetry-semantic-conventions-ai/pyproject.toml:11-11
Timestamp: 2025-08-21T04:01:31.783Z
Learning: The openllmetry repository uses a monorepo structure where individual packages like opentelemetry-semantic-conventions-ai do not maintain their own CHANGELOG.md files. There is a single CHANGELOG.md at the repository root level instead.

Applied to files:

  • CONTRIBUTING.md
🧬 Code graph analysis (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)
  • should_send_prompts (181-188)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (3)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • _is_truthy (19-20)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/utils.py (1)
  • should_send_prompts (49-52)
packages/opentelemetry-instrumentation-haystack/opentelemetry/instrumentation/haystack/utils.py (1)
  • should_send_prompts (21-24)

🔇 Additional comments (5)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)

9-30: LGTM!

The implementation is consistent with the openai package's should_send_prompts. The _is_truthy normalization ensures consistent handling of various truthy string values for both the environment variable and the context override.

Based on learnings, the code duplication between packages is acceptable since they must remain independent.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (4)

236-236: Good use of caching and type annotation.

Calling should_send_prompts() once and reusing the result via Final[bool] is efficient and provides good type safety.


345-349: Verify: Tool call arguments may contain sensitive data.

Content gating is applied to prompts and responses, but tool call arguments (lines 345-349, 425-432, 538-541) are still emitted regardless of should_trace_content. Tool arguments could contain sensitive function parameters depending on the use case.

If this is intentional (keeping tool metadata for observability while excluding actual content), please confirm. Otherwise, consider gating the arguments attribute similarly.


403-439: Content gating correctly applied to response outputs.

The gating logic consistently protects both content arrays (line 403) and direct text attributes (line 434) while allowing non-sensitive metadata like role and finish_reason to flow through.


470-484: Legacy path content gating looks good.

The legacy fallback path correctly gates prompt content emission, maintaining consistency with the main code path.

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 5f39b86 to 9ae367e on December 22, 2025, 21:51
@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 9ae367e to 4dc547e on January 2, 2026, 14:19

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

351-355: Consider gating tool call arguments by content tracing flag.

Tool call arguments at lines 355, 437, and 545 are traced without content gating. Since arguments may contain sensitive information (user data, PII, business logic parameters), they should probably be gated by should_trace_content similar to prompt and response content.

For example, at lines 351-355:

if tool_call.get('arguments'):
    args = tool_call['arguments']
    if not isinstance(args, str):
        args = json.dumps(args)
    otel_span.set_attribute(f"{prefix}.tool_calls.{j}.arguments", args)
🔎 Proposed fix to gate tool call arguments

For the prompt tool calls (lines 351-355):

-                            if tool_call.get('arguments'):
+                            if should_trace_content and tool_call.get('arguments'):
                                 args = tool_call['arguments']
                                 if not isinstance(args, str):
                                     args = json.dumps(args)
                                 otel_span.set_attribute(f"{prefix}.tool_calls.{j}.arguments", args)

For the completion tool calls (lines 434-437):

                                 otel_span.set_attribute(
                                     f"{GenAIAttributes.GEN_AI_COMPLETION}.{i}.tool_calls.0.name", tool_name)
-                                otel_span.set_attribute(
-                                    f"{GenAIAttributes.GEN_AI_COMPLETION}.{i}.tool_calls.0.arguments", arguments)
+                                if should_trace_content:
+                                    otel_span.set_attribute(
+                                        f"{GenAIAttributes.GEN_AI_COMPLETION}.{i}.tool_calls.0.arguments", arguments)
                                 otel_span.set_attribute(
                                     f"{GenAIAttributes.GEN_AI_COMPLETION}.{i}.tool_calls.0.id", tool_call_id)

Apply similar gating to line 545 in the legacy fallback path.

Also applies to: 434-437

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 9ae367e and 4dc547e.

📒 Files selected for processing (5)
  • .gitignore
  • CONTRIBUTING.md
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • .gitignore
  • CONTRIBUTING.md
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Test Packages (3.11)
  • GitHub Check: Test Packages (3.12)
  • GitHub Check: Test Packages (3.10)
  • GitHub Check: Build Packages (3.11)
  • GitHub Check: Lint
🔇 Additional comments (6)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

177-178: LGTM! Clean truthiness normalization.

The _is_truthy() helper correctly handles various input types (None, booleans, strings) and provides consistent truthiness evaluation across the codebase.

Based on learnings, code duplication between the openai and openai-agents packages is acceptable due to package independence requirements.


181-188: LGTM! Well-documented and consistent content tracing control.

The implementation correctly:

  • Adds explicit -> bool return type annotation
  • Documents that "content" includes both prompts and responses
  • Applies _is_truthy() normalization to both environment variable and context override for consistent handling
  • Uses OR logic to enable content tracing when either source is truthy
  • Defaults to "true" to preserve existing behavior
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (4)

3-3: LGTM! Proper imports for content tracing control.

The additions of Final to typing imports and should_send_prompts to the utils import are appropriate for implementing content tracing control.

Also applies to: 14-14


242-242: LGTM! Efficient caching of content tracing decision.

Using Final[bool] annotation and computing should_trace_content once per span end is an efficient approach that maintains type safety and signals the value shouldn't be reassigned.


308-311: LGTM! Consistent content gating for prompts and responses.

Content attributes (prompt content, response content, and text outputs) are correctly gated by should_trace_content, properly implementing the content tracing control mechanism.

Also applies to: 409-418, 440-445


476-490: LGTM! Consistent content gating in legacy fallback path.

The legacy fallback code correctly applies should_trace_content gating to prompt content (line 476) and response content/text (lines 518, 549), maintaining consistency with the primary code paths.

Note: Tool call arguments in this section (line 545) should also be gated as mentioned in the previous comment.

Also applies to: 518-554

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 4dc547e to 102e684 on January 7, 2026, 18:08

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

318-355: Gate tool call arguments by should_trace_content.

Tool call arguments (line 355) are emitted unconditionally, while message content (line 308) and response output (lines 409, 440, 518, 549) are all gated by should_trace_content. Tool arguments can contain sensitive data (user inputs, PII), making them functionally equivalent to message content. The Groq instrumentation confirms this pattern—it explicitly removes tool arguments when should_send_prompts() is false. Wrap the tool argument emission at line 355 with if should_trace_content: to maintain consistency.
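
A sketch of the gated emission this comment asks for, based on the snippet quoted in the earlier review (tool_call, prefix, j, and otel_span come from the surrounding loop in the hook):

if should_trace_content and tool_call.get('arguments'):
    # Arguments can carry user inputs or PII, so gate them like content.
    args = tool_call['arguments']
    if not isinstance(args, str):
        args = json.dumps(args)
    otel_span.set_attribute(f"{prefix}.tool_calls.{j}.arguments", args)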

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 4dc547e and 102e684.

📒 Files selected for processing (5)
  • .gitignore
  • CONTRIBUTING.md
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • CONTRIBUTING.md
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧠 Learnings (1)
📚 Learning: 2025-12-02T21:09:48.690Z
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.

Applied to files:

  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • dont_throw (52-78)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • dont_throw (132-160)
  • should_send_prompts (181-188)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Test Packages (3.10)
  • GitHub Check: Test Packages (3.12)
  • GitHub Check: Test Packages (3.11)
  • GitHub Check: Build Packages (3.11)
  • GitHub Check: Lint
🔇 Additional comments (9)
.gitignore (1)

36-36: LGTM!

Standard gitignore entry for version manager files (e.g., asdf, mise).

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (6)

3-3: LGTM!

The import additions are appropriate. Final ensures should_trace_content is treated as immutable within the function scope, and should_send_prompts provides centralized content tracing control.

Also applies to: 14-14


242-242: LGTM!

Computing should_trace_content once per span and marking it Final ensures consistent content gating throughout the function and prevents accidental mutation.


308-311: LGTM!

Content gating correctly checks both should_trace_content and content presence before emitting prompt content.


409-421: LGTM!

Content gating for response output messages correctly checks should_trace_content along with attribute existence and content presence.


440-445: LGTM!

Content gating is consistently applied to both direct text content and legacy prompt data handling.

Also applies to: 476-490


518-530: LGTM!

Content gating in the legacy fallback path is consistent with the modern code path, ensuring uniform behavior across different response formats.

Also applies to: 549-554

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

177-178: LGTM!

The _is_truthy helper correctly handles common truthy representations ("true", "1", "yes", "on") with case-insensitive matching and whitespace trimming. Edge cases like None, empty strings, and boolean values are handled correctly.


181-188: LGTM!

The refactored should_send_prompts() provides consistent behavior by normalizing both environment variable and context override through _is_truthy. The type annotation and docstring improve clarity. Default of "true" ensures content tracing is enabled by default, and the OR logic allows either source to enable tracing.

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 4 times, most recently from 25630e2 to 8a590c0 on January 28, 2026 20:15

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

607-670: Gate content extraction functions with content_tracing_enabled flag.

The content_tracing_enabled flag (line 610) gates tool specification extraction but doesn't gate calls to _extract_prompt_attributes and _extract_response_attributes. Both functions unconditionally emit gen_ai.prompt.*.content and gen_ai.completion.*.content attributes. This creates inconsistent behavior where tool specifications respect the content tracing flag while prompt/completion content does not.

Pass the flag to these extraction functions or wrap their calls in conditionals to ensure consistent content emission behavior.
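Two possible shapes, sketched; the three-argument form matches the signature change visible in the final diff further down:

# Option 1: thread the flag through as a parameter.
_extract_prompt_attributes(otel_span, input_data, trace_content)

# Option 2: guard at the call site and leave the signature unchanged.
if trace_content:
    _extract_prompt_attributes(otel_span, input_data)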

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

3-3: Unused import: Final.

Final is imported but not used anywhere in this file.

-from typing import Dict, Any, Final
+from typing import Dict, Any

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 3 times, most recently from 0b67ea4 to 1d8d4c9 on January 28, 2026 20:24
@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 3 times, most recently from a0fbb17 to c0cc2cf on January 28, 2026 20:25

@galkleinman galkleinman (Contributor) left a comment

LGTM

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from c0cc2cf to 571b070 on January 29, 2026 15:33

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)

612-664: Content tracing flag not applied to prompt and response extraction.

The trace_content flag is computed but not used to gate _extract_prompt_attributes (line 619) or _extract_response_attributes (line 663). This means prompts and completions are still emitted as span attributes regardless of the TRACELOOP_TRACE_CONTENT setting, defeating the PR's objective.

🐛 Proposed fix to gate content extraction
             if span_data and (
                 type(span_data).__name__ == "ResponseSpanData"
                 or isinstance(span_data, GenerationSpanData)
             ):
                 # Extract prompt data from input
-                input_data = getattr(span_data, "input", [])
-                _extract_prompt_attributes(otel_span, input_data)
+                if trace_content:
+                    input_data = getattr(span_data, "input", [])
+                    _extract_prompt_attributes(otel_span, input_data)

                 # Add function/tool specifications to the request using OpenAI semantic conventions
                 response = getattr(span_data, "response", None)
                 if (
                     response
                     and hasattr(response, "tools")
                     and response.tools
+                    and trace_content
                 ):
                     # Extract tool specifications
                     ...

-                if response:
+                if response and trace_content:
                     model_settings = _extract_response_attributes(otel_span, response)
                     self._last_model_settings = model_settings

667-674: Legacy fallback path also needs content gating.

The legacy fallback for other span types also extracts prompts and responses without checking trace_content.

🐛 Proposed fix for legacy fallback
             # Legacy fallback for other span types
             elif span_data:
-                input_data = getattr(span_data, "input", [])
-                _extract_prompt_attributes(otel_span, input_data)
+                if trace_content:
+                    input_data = getattr(span_data, "input", [])
+                    _extract_prompt_attributes(otel_span, input_data)

                 response = getattr(span_data, "response", None)
-                if response:
+                if response and trace_content:
                     model_settings = _extract_response_attributes(otel_span, response)
                     self._last_model_settings = model_settings

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from b969cea to abb5273 on January 29, 2026 15:49

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

109-116: Inconsistent gating: structural metadata should still be emitted when content is disabled.

When trace_content=False, the entire function_call_output branch is skipped, so role="tool" and tool_call_id are never set. This is inconsistent with the standard format handling (lines 87-92, 118-130) where role and tool_call_id are always extracted/emitted, and only the content is gated at line 123.

For consistency, the role and tool_call_id should be set unconditionally, with only the output content gated.

Proposed fix
-            elif (
-                msg_type == "function_call_output"
-                and trace_content
-            ):
+            elif msg_type == "function_call_output":
                 # Tool outputs are tool messages
                 role = "tool"
-                content = msg.get("output")
+                if trace_content:
+                    content = msg.get("output")
                 tool_call_id = msg.get("call_id")

Comment on lines 672 to +686
            # Legacy fallback for other span types
            elif span_data:
                input_data = getattr(span_data, "input", [])
-               _extract_prompt_attributes(otel_span, input_data)
+               _extract_prompt_attributes(otel_span, input_data, trace_content)

                response = getattr(span_data, "response", None)
                if response:
-                   model_settings = _extract_response_attributes(otel_span, response)
+                   model_settings = _extract_response_attributes(otel_span, response, trace_content)
                    self._last_model_settings = model_settings

            elif (
                _has_realtime_spans
                and SpeechSpanData
                and isinstance(span_data, SpeechSpanData)
                and trace_content
@coderabbitai coderabbitai Bot Jan 29, 2026

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's read the specific file and lines mentioned in the review
cd packages/opentelemetry-instrumentation-openai-agents && \
wc -l opentelemetry/instrumentation/openai_agents/_hooks.py

Repository: traceloop/openllmetry

Length of output: 122


🏁 Script executed:

# Read the code around lines 672-741 to understand the control flow
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '660,750p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n

Repository: traceloop/openllmetry

Length of output: 4839


🏁 Script executed:

# Search for test files related to spans and realtime handling
find packages/opentelemetry-instrumentation-openai-agents -name "*.py" -path "*/test*" -type f | head -20

Repository: traceloop/openllmetry

Length of output: 633


🏁 Script executed:

# Search for span_data type checks and how they're handled in tests
rg -n "SpeechSpanData|TranscriptionSpanData|SpeechGroupSpanData" packages/opentelemetry-instrumentation-openai-agents -t py -B 2 -A 2

Repository: traceloop/openllmetry

Length of output: 25242


🏁 Script executed:

# Get more context around lines 650-750 to see the complete control flow
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '650,745p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n

Repository: traceloop/openllmetry

Length of output: 5149


🏁 Script executed:

# Search for the method that contains lines 672-741 to understand its full structure
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '620,700p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n

Repository: traceloop/openllmetry

Length of output: 4701


🏁 Script executed:

# Read the test that checks SpeechSpanData handling
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '73,110p' tests/test_realtime.py | cat -n

Repository: traceloop/openllmetry

Length of output: 1876


🏁 Script executed:

# Check what attributes are set in the test assertions
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '100,135p' tests/test_realtime.py | cat -n

Repository: traceloop/openllmetry

Length of output: 1878


🏁 Script executed:

# Find the method that contains lines 672-741
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '600,680p' opentelemetry/instrumentation/openai_agents/_hooks.py | grep -n "def \|class " | tail -5

Repository: traceloop/openllmetry

Length of output: 99


🏁 Script executed:

# Get method signature and context for line 672
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '600,675p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n | grep -A 5 -B 5 "672"

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# Read test assertions more completely to see what attributes the tests verify
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '107,125p' tests/test_realtime.py

Repository: traceloop/openllmetry

Length of output: 880


🏁 Script executed:

# Find the on_span_end method
cd packages/opentelemetry-instrumentation-openai-agents && \
rg -n "def on_span_end" opentelemetry/instrumentation/openai_agents/_hooks.py

Repository: traceloop/openllmetry

Length of output: 101


🏁 Script executed:

# Get the full on_span_end method
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '590,745p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n

Repository: traceloop/openllmetry

Length of output: 8247


🏁 Script executed:

# Check what attributes are actually verified in the test
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '107,172p' tests/test_realtime.py | grep "assert.*attributes"

Repository: traceloop/openllmetry

Length of output: 291


🏁 Script executed:

# Check if realtime attributes (GEN_AI_PROMPT, GEN_AI_COMPLETION) are tested
cd packages/opentelemetry-instrumentation-openai-agents && \
rg "GEN_AI_PROMPT|GEN_AI_COMPLETION" tests/test_realtime.py

Repository: traceloop/openllmetry

Length of output: 47


🏁 Script executed:

# Let's trace the issue - check what span_data types have input/output attributes
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '28,50p' tests/test_realtime.py | cat -n

Repository: traceloop/openllmetry

Length of output: 789


🏁 Script executed:

# Check what MockSpanData contains and what attributes are set in tests
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '1,30p' tests/test_realtime.py | cat -n

Repository: traceloop/openllmetry

Length of output: 1185


🏁 Script executed:

# Let's look at the complete MockSpanData and MockAgentSpan
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '1,70p' tests/test_realtime.py | cat -n

Repository: traceloop/openllmetry

Length of output: 2494


🏁 Script executed:

# Run the realtime test to see if it actually passes
cd packages/opentelemetry-instrumentation-openai-agents && python -m pytest tests/test_realtime.py::TestRealtimeSpans::test_speech_span_start_creates_otel_span -v 2>&1 | head -50

Repository: traceloop/openllmetry

Length of output: 475


🏁 Script executed:

# Let's verify the control flow issue by examining what the fallback handler does vs realtime handlers
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '84,92p' opentelemetry/instrumentation/openai_agents/_hooks.py

Repository: traceloop/openllmetry

Length of output: 357


🏁 Script executed:

# And compare with what the realtime handler does
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '99,114p' opentelemetry/instrumentation/openai_agents/_hooks.py

Repository: traceloop/openllmetry

Length of output: 669


🏁 Script executed:

# Check if _extract_prompt_attributes is the same as what realtime handlers do
cd packages/opentelemetry-instrumentation-openai-agents && \
rg -n "_extract_prompt_attributes" opentelemetry/instrumentation/openai_agents/_hooks.py -A 10 | head -30

Repository: traceloop/openllmetry

Length of output: 1402


🏁 Script executed:

# Let's verify the control flow issue more clearly - check what _extract_prompt_attributes does with non-list input
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '48,90p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n

Repository: traceloop/openllmetry

Length of output: 1660


🏁 Script executed:

# Now let's check if any tests verify the GEN_AI_PROMPT attributes being set for realtime spans
cd packages/opentelemetry-instrumentation-openai-agents && \
grep -r "GEN_AI_PROMPT\|GEN_AI_COMPLETION" tests/

Repository: traceloop/openllmetry

Length of output: 680


🏁 Script executed:

# Let's create a simple test to verify the actual behavior
cd packages/opentelemetry-instrumentation-openai-agents && \
python3 << 'EOF'
# Simulate what happens with a string input
class MockSpanData:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

speech_data = MockSpanData(input="Hello, world!")

# Simulate the fallback handler path
input_data = getattr(speech_data, "input", [])
print(f"input_data type: {type(input_data)}")
print(f"input_data value: {input_data}")

# What _extract_prompt_attributes expects to iterate
for i, message in enumerate(input_data):
    print(f"Message {i}: {message}")
    # This will treat each character as a message!
EOF

Repository: traceloop/openllmetry

Length of output: 299


🏁 Script executed:

# Verify the attributes that would actually be set
cd packages/opentelemetry-instrumentation-openai-agents && \
sed -n '73,103p' opentelemetry/instrumentation/openai_agents/_hooks.py | cat -n

Repository: traceloop/openllmetry

Length of output: 1350


Realtime span handling is unreachable due to the generic elif span_data: fallback.

The elif span_data: at line 673 will match any truthy span_data object, including SpeechSpanData, TranscriptionSpanData, and SpeechGroupSpanData instances. This prevents the realtime-specific isinstance() checks at lines 682+, 705+, and 728+ from ever executing.

When realtime spans are processed by the fallback handler, _extract_prompt_attributes receives string input (e.g., "Hello, world!") and iterates over it character-by-character, treating each character as a separate message. The proper realtime handlers—which correctly process string input as single content items with attributes like GEN_AI_PROMPT.0.content and GEN_AI_PROMPT.0.role—never execute. The trace_content parameter additions to these handlers are ineffective.

Move the realtime span conditions (isinstance(span_data, SpeechSpanData), etc.) before the generic elif span_data: fallback.
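A condensed sketch of the suggested ordering, using names from the file (handler bodies elided):

# Dispatch realtime span types before the generic fallback.
if _has_realtime_spans and SpeechSpanData and isinstance(span_data, SpeechSpanData):
    if trace_content:
        ...  # dedicated speech handling: one content item, not char-by-char
elif _has_realtime_spans and TranscriptionSpanData and isinstance(
    span_data, TranscriptionSpanData
):
    ...  # dedicated transcription handling
elif span_data:
    # The generic fallback now only ever sees non-realtime span types.
    _extract_prompt_attributes(
        otel_span, getattr(span_data, "input", []), trace_content
    )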

🤖 Prompt for AI Agents
In
`@packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py`
around lines 672-686: The realtime-specific handlers are currently unreachable
because the generic `elif span_data:` branch runs first; move the realtime
condition checks for `isinstance(span_data, SpeechSpanData)`,
`TranscriptionSpanData`, and `SpeechGroupSpanData` so they appear before the
generic `elif span_data:` fallback in the same control flow. Specifically,
reorder the blocks so the `if _has_realtime_spans and SpeechSpanData and
isinstance(span_data, SpeechSpanData) and trace_content` (and the similar
`TranscriptionSpanData` / `SpeechGroupSpanData` checks) run earlier, ensuring
`_extract_prompt_attributes` and `_extract_response_attributes` are only called
by the legacy fallback for non-realtime span types and that realtime spans are
processed by their dedicated handlers which use `trace_content`, `otel_span`,
and `span_data` correctly.

@duanyutong duanyutong (Contributor, Author) left a comment

cc @galkleinman this comment seems unrelated to my change in this particular PR. I'll defer to you on that.

@coderabbitai coderabbitai Bot left a comment

Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

@galkleinman galkleinman merged commit f941d0c into traceloop:main Jan 29, 2026
10 checks passed